    LIDAR-Based road signs detection For Vehicle Localization in an HD Map

    Self-vehicle localization is one of the fundamental tasks for autonomous driving. Most current techniques for global positioning are based on GNSS (Global Navigation Satellite Systems). However, these solutions do not provide a localization accuracy better than 2-3 m in open-sky environments [1]. Alternatively, the use of maps has been widely investigated for localization, since maps can be pre-built very accurately. State-of-the-art approaches often use dense maps or feature maps for localization. In this paper, we propose a road sign perception system for vehicle localization within a third-party map. This is challenging since third-party maps are usually provided with sparse geometric features, which makes the localization task more difficult in comparison to dense maps. The proposed approach extends the work in [2], where a localization system based on lane markings was developed. Experiments have been conducted on a highway-like test track using GNSS/INS with RTK corrections as ground truth (GT). Error evaluations are given as cross-track and along-track errors defined in the curvilinear coordinates [3] related to the map.
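
    The cross-track / along-track evaluation mentioned above can be illustrated with a short sketch. This is not the authors' implementation: the map centerline is a made-up 2D polyline, the along-track error is taken as the difference of curvilinear abscissas, and the cross-track error as the difference of signed lateral offsets, which is one common way to define these quantities.

    # Minimal sketch (not the paper's implementation): along-track / cross-track
    # errors of an estimated position with respect to a ground-truth position,
    # expressed in curvilinear coordinates along a map centerline given as a
    # 2D polyline. The centerline and positions below are made-up values.
    import numpy as np

    def project(polyline, p):
        """Closest-point projection of p onto the polyline.
        Returns (curvilinear abscissa s, signed lateral offset l)."""
        best, s_acc = None, 0.0
        for a, b in zip(polyline[:-1], polyline[1:]):
            d = b - a
            seg_len = np.linalg.norm(d)
            t = np.clip(np.dot(p - a, d) / (seg_len ** 2), 0.0, 1.0)
            foot = a + t * d
            dist = np.linalg.norm(p - foot)
            u, r = d / seg_len, p - a
            lat = u[0] * r[1] - u[1] * r[0]        # positive to the left of the segment
            if best is None or dist < best[0]:
                best = (dist, s_acc + t * seg_len, lat)
            s_acc += seg_len
        return best[1], best[2]

    def curvilinear_errors(polyline, p_gt, p_est):
        s_gt, l_gt = project(polyline, p_gt)
        s_est, l_est = project(polyline, p_est)
        return s_est - s_gt, l_est - l_gt          # (along-track, cross-track) errors

    centerline = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 5.0]])
    at, ct = curvilinear_errors(centerline, np.array([40.0, 1.8]), np.array([40.3, 1.6]))
    print(f"along-track: {at:.2f} m, cross-track: {ct:.2f} m")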

    LIDAR-Based Lane Marking Detection For Vehicle Positioning in an HD Map

    Accurate self-vehicle localization is an important task for autonomous driving and ADAS. Current GNSS-based solutions do not provide an accuracy better than 2-3 m in open-sky environments. Moreover, map-based localization using HD maps has become an interesting source of information for intelligent vehicles. In this paper, a map-based localization method using a multi-layer LIDAR is proposed. Our method mainly relies on road lane markings and an HD map to achieve lane-level accuracy. At first, road points are segmented by analysing the geometric structure of the points returned by each layer. Secondly, thanks to LIDAR reflectivity data, road marking points are projected onto a 2D image and then detected using the Hough transform. Detected lane markings are then matched to our HD map using a Particle Filter (PF) framework. Experiments are conducted on a highway-like test track using GPS/INS with RTK corrections as ground truth. Our method is capable of providing lane-level localization with a 22 cm cross-track accuracy.
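
    The "reflectivity image + Hough transform" step described above can be sketched roughly as follows, using NumPy and OpenCV. The grid resolution, reflectivity threshold and Hough parameters are illustrative assumptions, not the values used in the paper, and detect_marking_segments and its inputs are hypothetical names.

    # Hedged sketch: project segmented road points onto a bird's-eye-view
    # reflectivity grid, threshold the highly reflective pixels, and extract
    # line segments with the probabilistic Hough transform.
    import numpy as np
    import cv2

    def detect_marking_segments(road_points, res=0.1, extent=(0.0, 40.0, -10.0, 10.0)):
        """road_points: (N, 4) array of x, y, z, reflectivity (assumed scaled to 0-255)."""
        x_min, x_max, y_min, y_max = extent
        w = int((x_max - x_min) / res)
        h = int((y_max - y_min) / res)
        img = np.zeros((h, w), dtype=np.uint8)

        # Keep points inside the grid and write their reflectivity into the image.
        m = (road_points[:, 0] >= x_min) & (road_points[:, 0] < x_max) \
            & (road_points[:, 1] >= y_min) & (road_points[:, 1] < y_max)
        pts = road_points[m]
        cols = ((pts[:, 0] - x_min) / res).astype(int)
        rows = ((pts[:, 1] - y_min) / res).astype(int)
        img[rows, cols] = np.clip(pts[:, 3], 0, 255).astype(np.uint8)

        # High-reflectivity pixels are candidate lane-marking returns.
        _, binary = cv2.threshold(img, 120, 255, cv2.THRESH_BINARY)
        segments = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=30,
                                   minLineLength=20, maxLineGap=10)
        return [] if segments is None else segments.reshape(-1, 4)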

    LIDAR-Based High Reflective Landmarks (HRL)s For Vehicle Localization in an HD Map

    Accurate localization is very important to ensure the performance and safety of autonomous vehicles. In particular, with the appearance of High Definition (HD) sparse geometric road maps, many research works have focused on the deployment of accurate localization systems within a previously built map. In this paper, we solve a localization problem by matching road perceptions from a 3D LIDAR sensor with HD map elements. The perception system detects High Reflective Landmarks (HRL) such as lane markings, road signs and guard rail reflectors (GRR) from a 3D point cloud. A particle filtering algorithm estimates the position of the vehicle by matching observed HRLs with HD map attributes. The proposed approach extends our work in [1] and [2], where a localization system based on lane markings and road signs was developed. Experiments have been conducted on a highway-like test track using GNSS/INS with RTK corrections as ground truth (GT). Error evaluations are given as cross-track (CT) and along-track (AT) errors defined in the curvilinear coordinates [3] related to the map. The obtained accuracy of our localization system is 18 cm for the cross-track error and 32 cm for the along-track error.
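
    The landmark-matching particle filter can be sketched as below (a minimal illustration, not the paper's algorithm): particles are propagated with a noisy motion model, weighted by how well the observed HRLs align with map landmarks under a nearest-neighbour association and a Gaussian likelihood, and then resampled. The noise levels, likelihood and association scheme are assumptions.

    # Minimal particle-filter sketch for landmark-based localization.
    import numpy as np

    rng = np.random.default_rng(0)

    def predict(particles, v, yaw_rate, dt, noise=(0.05, 0.01)):
        """Propagate (N, 3) particles [x, y, yaw] with a noisy unicycle model."""
        n = len(particles)
        v_n = v + rng.normal(0, noise[0], n)
        w_n = yaw_rate + rng.normal(0, noise[1], n)
        particles[:, 0] += v_n * dt * np.cos(particles[:, 2])
        particles[:, 1] += v_n * dt * np.sin(particles[:, 2])
        particles[:, 2] += w_n * dt
        return particles

    def update(particles, observations, map_landmarks, sigma=0.3):
        """observations: (M, 2) landmark positions observed in the vehicle frame."""
        weights = np.ones(len(particles))
        for i, (x, y, yaw) in enumerate(particles):
            R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
            obs_world = observations @ R.T + np.array([x, y])
            # Nearest map landmark for each observation -> Gaussian likelihood.
            d = np.linalg.norm(obs_world[:, None, :] - map_landmarks[None, :, :], axis=2)
            weights[i] = np.prod(np.exp(-d.min(axis=1) ** 2 / (2 * sigma ** 2)))
        weights = np.maximum(weights, 1e-300)   # guard against total degeneracy
        weights /= weights.sum()
        # Multinomial resampling proportional to the weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]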

    3D cameras for the localization of a mobile platform in urban environment

    The aim of the thesis is to develop a new kind of localization system, composed of three 3D cameras such as the Kinect and an additional fisheye camera. The localization algorithm is based on visual odometry principles in order to compute the trajectory of the mobile platform in real time from the data provided by the 3D cameras. The originality of the processing method lies in the exploitation of orthoimages generated from the point clouds acquired in real time by the three cameras. The successive positions and the trajectory of the mobile platform are derived from the analysis of the differences between successive orthoimages.
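
    The orthoimage idea can be illustrated with a minimal sketch: rasterise each point cloud into a top-down image and estimate the planar shift between two successive orthoimages, here via phase correlation. The resolution, image extent and the choice of phase correlation are assumptions for illustration, not the thesis implementation.

    # Hedged sketch: point cloud -> top-down orthoimage, then integer shift
    # between two successive orthoimages by phase correlation.
    import numpy as np

    def orthoimage(points, res=0.05, size=200):
        """Rasterise (N, 3) points into a size x size top-down image of max height."""
        img = np.zeros((size, size), dtype=np.float32)
        half = size * res / 2.0
        m = (np.abs(points[:, 0]) < half) & (np.abs(points[:, 1]) < half)
        pts = points[m]
        rows = ((pts[:, 1] + half) / res).astype(int)
        cols = ((pts[:, 0] + half) / res).astype(int)
        np.maximum.at(img, (rows, cols), pts[:, 2].astype(np.float32))  # highest return per cell
        return img

    def estimate_shift(img_a, img_b):
        """Integer (dy, dx) translation of img_b's content relative to img_a."""
        fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
        cross = fb * np.conj(fa)
        cross /= np.abs(cross) + 1e-9                  # normalised cross-power spectrum
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap shifts larger than half the image back to negative offsets.
        if dy > img_a.shape[0] // 2: dy -= img_a.shape[0]
        if dx > img_a.shape[1] // 2: dx -= img_a.shape[1]
        return dy, dx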

    Towards a Reference Data Generation Framework for Performance Assessment of Perception Systems

    Sensors and their associated data fusion techniques play a crucial role in Autonomous Vehicle (AV) decision-making applications. Accurately evaluating the performance and reliability of the perception sources is an important task in order to know the consistency of this data fusion. In this paper, a reference data generation framework for assessing perception sensor performance is proposed. Our approach relies on the complementary use of three data sources: a highly precise 3D map with semantic information, a high-density range finder sensor and a GNSS-RTK/INS localization unit. The 3D map provides semantic knowledge of the environment, and the HD range finder precisely senses the ego-vehicle's surroundings. Finally, the 3D map and the HD scans are geometrically associated using the positioning information in order to combine them and to infer reference data. Thorough experiments were conducted to evaluate and validate the proposed approach. As a proof of concept, the performance of a LiDAR-based road plane detection method was evaluated, quantified and reported in terms of precision and recall.
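
    The precision/recall evaluation used as the proof of concept can be sketched as follows, assuming per-point boolean "road plane" labels from the detector and from the map-derived reference (an assumed data format, not the framework's actual one).

    # Minimal sketch: precision and recall of per-point road-plane labels
    # against reference labels over the same LiDAR points.
    import numpy as np

    def precision_recall(detected, reference):
        """detected, reference: boolean masks over the same LiDAR points."""
        tp = np.count_nonzero(detected & reference)     # correctly labelled road points
        fp = np.count_nonzero(detected & ~reference)    # detected as road but not road
        fn = np.count_nonzero(~detected & reference)    # road points that were missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # Example with made-up labels for five points.
    det = np.array([True, True, False, True, False])
    ref = np.array([True, False, False, True, True])
    print(precision_recall(det, ref))    # (0.666..., 0.666...)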